Model Generator
The Model Generator dialog provides options — architectures, model types, input dimensions, and others — for generating new deep learning models for denoising, semantic segmentation, and super-resolution.
Click the New button on the Model Overview panel to open the Model Generator dialog, shown below.
Model Generator dialog
| Option | Description |
|---|---|
| Show architectures for | Lets you filter the available architectures to those recommended for segmentation, super-resolution, or denoising. Semantic segmentation… Filters the Architecture list to models best suited for semantic segmentation, which is the process of associating each pixel of an image with a class label, such as a material phase or anatomical feature. Super-resolution… Filters the Architecture list to models best suited for super-resolution. Denoising… Filters the Architecture list to models best suited for denoising. |
| Architecture | Lists the default models supplied with the Deep Learning Tool. Architectures can be filtered by type. |
| Architecture description | Provides a short description of the selected architecture and a link for further information (see also Deep Learning Architectures). |
| Model type | Lets you choose the type of deep learning model to generate. Regression… Regression model types are suitable for super-resolution and denoising. Semantic segmentation… Semantic segmentation model types are suitable for binary and multi-class segmentation tasks. |
| Class Count | Available only for semantic segmentation model types; lets you enter the number of classes required. The minimum number of classes is 2, for binary segmentation tasks, while the maximum is 20, for multi-class segmentations. |
| Input count | Lets you include multiple inputs for training. For example, when you are working with data from simultaneous image acquisition systems, you might want to select each modality as an input. |
| Input Dimension | Lets you choose to train slice by slice (2D) or over multiple slices (3D). Note: Not all data is suitable for training in 3D, for example, cases in which features are not consistent over multiple slices. |
| Name | Lets you enter a name for the generated model. Note: Names are automatically formatted. |
| Description | Lets you enter a description of your model. |
| Parameters | Lists the hyperparameters associated with the selected architecture and the default values for each. Note: Refer to the documents referenced in the Architecture description box for information about the implemented architectures and their associated parameters. |
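The constraints described above (Regression for super-resolution and denoising, Semantic segmentation with 2 to 20 classes) can be sketched as a small validation routine. The function name and structure below are illustrative only and are not part of the Deep Learning Tool:

```python
# Illustrative sketch (not part of the Deep Learning Tool) of the
# Model Generator constraints: regression models take no class count,
# while semantic segmentation requires between 2 and 20 classes.

def validate_model_spec(model_type, class_count=None):
    """Return True if the (model_type, class_count) pair is valid."""
    if model_type == "Regression":
        # Regression models (super-resolution, denoising) have no classes.
        return class_count is None
    if model_type == "Semantic segmentation":
        # 2 = binary segmentation; up to 20 classes for multi-class tasks.
        return class_count is not None and 2 <= class_count <= 20
    return False  # unknown model type

print(validate_model_spec("Semantic segmentation", 2))   # binary task
print(validate_model_spec("Semantic segmentation", 21))  # above the maximum
print(validate_model_spec("Regression"))                 # no class count needed
```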
A limited number of the basic parameters for Deep Learning architectures are available for editing in the Model Generator. These are listed below. You can also edit the basic and advanced training parameters of a Deep Learning model after it is generated.
Refer to the documents referenced in Deep Learning Architectures for more information about the editable parameters available for the implemented architectures.
| Architecture | Description |
|---|---|
| Attention U-Net | Depth level… Depth of the network, as determined by the number of pooling layers. Initial filter count… Filter count at the first convolution layer. |
| Autoencoder | Initial filter count… Filter count at the first convolutional layer. Kernel size… Kernel size of the convolutional filters. Pooling size… Pooling window size. |
| BiSeNet | Patch size… Size of the input patches. |
| DeepLabV3+ | Backbone… Backbone to use — 'Xception' or 'MobileNetV2'. Patch size… Fixed size of the input patches. Output stride… Ratio of the image size to the encoder output size. |
| EDSR | Scale… Ratio of the input size to the output size. Patch size… Fixed size of the input patches. Filter count… Filter count at each convolution layer. ResNet block count… The number of times to repeat ResNet blocks. Use Tanh activation… Determines whether Tanh activation will be applied — True or False. |
| FC-DenseNet | Model type… Model variation to be generated — FC-DenseNet56, FC-DenseNet67, or FC-DenseNet103. |
| LinkNet | Patch size… Fixed size of the input patches. Initial filter count… Filter count at the first convolution layer. |
| Noise2Noise | Initial filter count… Filter count at the first convolution layer. |
| Noise2Noise_SRResNet | Filter count… Filter count at each convolution layer. ResNet block count… The number of times to repeat ResNet blocks. |
| PSPNet | Backbone… Backbone to use — ResNet50 or ResNet101. Patch size… Fixed size of the input patches. Filter count… Filter count at each convolution layer. |
| Sensor3D | Depth level… Depth of the network, as determined by the number of pooling layers and the patch size. Initial filter count… Filter count at the first convolution layer. |
| U-Net | Depth level… Depth of the network, as determined by the number of pooling layers. Initial filter count… Filter count at the first convolution layer. |
| U-Net 3D | Topology… The topology of the model. Initial filter count… Filter count at the first convolution layer. Use batch normalization… Determines whether batch normalization will be applied — True or False. |
| U-Net++ | Depth level… Depth of the network, as determined by the number of pooling layers. Initial filter count… Filter count at the first convolution layer. |
| WDSR | Model type… Model variation to be generated — WDSR-A or WDSR-B. Scale… Ratio of the input size to the output size. Patch size… Fixed size of the input patches. Filter count… Filter count at each convolution layer. ResNet block count… The number of times to repeat ResNet blocks. ResNet block expansion… The ratio by which to multiply the number of filters in the ResNet block expansion layer. |
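As a rough illustration of how Depth level and Initial filter count typically interact in U-Net-style architectures, the sketch below assumes the common convention that each pooling layer halves the spatial size and doubles the filter count. This is an assumption for illustration; check the referenced architecture documents for the exact behavior of each model:

```python
# Illustrative sketch (not taken from the tool's implementation) of how
# "Depth level" and "Initial filter count" commonly interact in
# U-Net-style encoders, assuming each pooling layer halves the spatial
# size and doubles the filter count.

def filters_per_level(initial_filter_count, depth_level):
    """Filter count at each encoder level, doubling per pooling layer."""
    return [initial_filter_count * 2 ** level for level in range(depth_level)]

def patch_size_is_valid(patch_size, depth_level):
    """A patch must survive depth_level halvings, i.e. be divisible by
    2 ** depth_level, for the decoder to mirror the encoder exactly."""
    return patch_size % 2 ** depth_level == 0

print(filters_per_level(64, 4))    # [64, 128, 256, 512]
print(patch_size_is_valid(64, 5))  # True: 64 is divisible by 32
print(patch_size_is_valid(48, 5))  # False: 48 is not divisible by 32
```

This divisibility check is one reason a deeper network may reject a patch size that a shallower one accepts.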
Refer to the following topics for information about generating models for specific tasks:
